
    Gaze-contingent perceptually enabled interactions in the operating theatre.

    PURPOSE: Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information, especially perceptually enabled information, from multiple sources could help to meet these goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment. METHODS: The synergy of wearable eye-tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework's possible functionalities, an articulated collaborative robotic arm with a laser pointer is integrated, and the set-up is used to project the surgeon's fixation point in 3D space. RESULTS: The implementation is evaluated over 60 fixations on predefined targets, with distances between the subject and the targets of 92-212 cm and between the robot and the targets of 42-193 cm. The median overall system error is currently 3.98 cm. The system's real-time potential is also highlighted. CONCLUSIONS: The work presented here is an introduction to, and preliminary experimental validation of, core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre.
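
    To make the fixation-projection step concrete, the sketch below lifts a 2D gaze point from the glasses' scene camera into a world-frame 3D point using the camera pose recovered by SLAM. The intrinsics, pose and depth inputs, and all names, are illustrative assumptions rather than the authors' implementation.

    # Minimal sketch: lifting a 2D gaze point from eye-tracking glasses into a
    # world-frame 3D fixation using the scene-camera pose provided by SLAM.
    # Intrinsics, pose, and depth are assumed inputs; names are illustrative.
    import numpy as np

    def gaze_to_world(gaze_px, K, T_world_cam, depth_m):
        """gaze_px: (u, v) gaze point on the scene-camera image.
        K: 3x3 camera intrinsics.
        T_world_cam: 4x4 camera-to-world pose from SLAM.
        depth_m: scene depth (metres) along the gaze ray."""
        # Back-project the pixel to a unit ray in the camera frame.
        uv1 = np.array([gaze_px[0], gaze_px[1], 1.0])
        ray_cam = np.linalg.inv(K) @ uv1
        ray_cam /= np.linalg.norm(ray_cam)
        # Scale the ray by the measured depth to get a camera-frame point.
        p_cam = np.append(ray_cam * depth_m, 1.0)
        # Transform into the world frame tracked by SLAM.
        return (T_world_cam @ p_cam)[:3]

    # Example: gaze at the image centre, target 1.5 m away.
    K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
    T = np.eye(4)
    print(gaze_to_world((320, 240), K, T, 1.5))  # -> [0. 0. 1.5]

    A robot-mounted laser pointer could then be servoed to aim at the returned world-frame point; that hand-off is outside this sketch.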

    LaryngoTORS: a novel cable-driven parallel robotic system for transoral laser phonosurgery

    Transoral laser phonosurgery is a commonly used surgical procedure in which a laser beam is used to perform incision, ablation or photocoagulation of laryngeal tissues. Two techniques are commonly practiced: free-beam and fiber delivery. For free-beam delivery, a laser scanner is integrated into a surgical microscope to provide an accurate laser scanning pattern. This approach can only be used under direct line of sight, which may cause increased postoperative pain and injury to the patient, is uncomfortable for the surgeon during prolonged operations, offers poor manipulability, and requires extensive training. In contrast, the fiber delivery technique uses a flexible fiber to transmit the laser beam and therefore does not require direct line of sight. However, it can only achieve manual levels of accuracy, repeatability and velocity, and does not allow for pattern scanning. Robotic systems have been developed to overcome the limitations of both techniques, but they offer limited workspace and degrees of freedom (DoF), limiting their clinical applicability. This work presents the LaryngoTORS, a robotic system that aims to overcome the limitations of both techniques by using a cable-driven parallel mechanism (CDPM), attached at the end of a curved laryngeal blade, to control the end tip of the laser fiber. The system allows autonomous generation of scanning patterns or user-driven free-path scanning. Path scan validation demonstrated errors as low as 0.054±0.028 mm and high repeatability of 0.027±0.020 mm (6×2 mm arc line). Ex vivo tests on chicken tissue have been carried out. The results show the ability of the system to overcome the limitations of current methods with high accuracy and repeatability using the superior fiber delivery approach.
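
    As a rough illustration of the cable-driven actuation principle, the sketch below computes toy planar CDPM cable lengths for waypoints along an arc scanning pattern. The anchor layout, dimensions and function names are assumed for illustration and do not reflect the actual LaryngoTORS geometry.

    # Toy inverse kinematics for a planar cable-driven parallel mechanism
    # (CDPM): each cable length is the distance from its fixed anchor to the
    # desired end-tip position. Anchor layout is illustrative only.
    import math

    ANCHORS = [(0.0, 0.0), (6.0, 0.0), (6.0, 2.0), (0.0, 2.0)]  # mm, assumed

    def cable_lengths(tip_xy):
        """Return the cable lengths (mm) placing the fiber tip at tip_xy."""
        return [math.dist(a, tip_xy) for a in ANCHORS]

    def arc_line(center, radius, start_deg, end_deg, steps=50):
        """Waypoints along an arc scanning pattern, e.g. a 6x2 mm arc line."""
        pts = []
        for i in range(steps + 1):
            t = math.radians(start_deg + (end_deg - start_deg) * i / steps)
            pts.append((center[0] + radius * math.cos(t),
                        center[1] + radius * math.sin(t)))
        return pts

    # Print the cable lengths needed at each waypoint of a small arc.
    for p in arc_line((3.0, 1.0), 1.0, 180, 360, steps=4):
        print(p, cable_lengths(p))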

    An eye-tracking based robotic scrub nurse: proof of concept

    Background Within surgery, assistive robotic devices (ARD) have been reported to improve patient outcomes. ARD can offer the surgical team a “third hand” to perform wider tasks and more degrees of motion in comparison with conventional laparoscopy. We test an eye-tracking based robotic scrub nurse (RSN) in a simulated operating room, based on a novel real-time framework for theatre-wide 3D gaze localization in a mobile fashion. Methods Surgeons (ST) performed segmental resection of pig colon and handsewn end-to-end anastomosis while wearing eye-tracking glasses (ETG) assisted by distributed RGB-D motion sensors. To select instruments, surgeons fixed their gaze on a screen, initiating the RSN to pick up and transfer the item. The task performed with the assistance of a human scrub nurse (HSNt) was compared with the task performed with the assistance of robotic and human scrub nurses (R&HSNt). Task load (NASA-TLX), technology acceptance (Van der Laan’s), metric data on performance, and team communication were measured. Results Overall, 10 ST participated. NASA-TLX feedback for ST on HSNt vs R&HSNt usage revealed no significant difference in mental, physical or temporal demands and no change in task performance. ST reported a significantly higher frustration score with R&HSNt. Van der Laan’s scores showed positive usefulness and satisfaction scores for using the RSN. No significant difference in operating time was observed. Conclusions We report initial findings of our eye-tracking based RSN. It enables mobile, unrestricted hands-free human–robot interaction intra-operatively. Importantly, this platform is deemed non-inferior to HSNt and accepted by ST and HSN test users.
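
    One plausible way to turn screen fixations into pick-up commands is a dwell-time trigger, sketched below: a selection fires once the gaze has stayed inside an instrument's on-screen region for a set time. The regions, threshold and callback are assumptions, not the study's protocol.

    # Sketch of a dwell-time trigger: when gaze stays inside an instrument's
    # on-screen region for DWELL_S seconds, request that the robotic scrub
    # nurse pick up and transfer the item. Regions/threshold are assumed.
    import time

    DWELL_S = 1.0  # dwell threshold (seconds) before a selection fires

    # screen regions: name -> (x_min, y_min, x_max, y_max) in pixels
    REGIONS = {"scissors": (0, 0, 200, 200), "forceps": (200, 0, 400, 200)}

    class DwellSelector:
        def __init__(self, on_select):
            self.on_select = on_select  # callback, e.g. robot pick-and-transfer
            self.current = None         # region the gaze is currently inside
            self.entered = 0.0          # timestamp when gaze entered it

        def update(self, gaze_xy, now=None):
            now = time.monotonic() if now is None else now
            hit = next((n for n, (x0, y0, x1, y1) in REGIONS.items()
                        if x0 <= gaze_xy[0] < x1 and y0 <= gaze_xy[1] < y1),
                       None)
            if hit != self.current:          # entered a new region (or left)
                self.current, self.entered = hit, now
            elif hit and now - self.entered >= DWELL_S:
                self.entered = float("inf")  # fire once per fixation
                self.on_select(hit)

    sel = DwellSelector(on_select=lambda item: print("RSN: fetching", item))
    sel.update((100, 100), now=0.0)
    sel.update((110, 105), now=1.2)  # -> RSN: fetching scissors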

    A novel gaze-controlled flexible robotized endoscope; preliminary trial and report

    Background Interventional endoluminal therapy is rapidly advancing as a minimally invasive surgical technique. The expanding remit of endoscopic therapy necessitates precision control. Eye tracking is an emerging technology which allows intuitive control of devices. This was a feasibility study to establish whether a novel eye gaze-controlled endoscopic system could be used to intuitively control an endoscope. Methods An eye gaze-control system consisting of eye-tracking glasses, specialist cameras and a joystick was used to control a robotically driven endoscope, allowing steering, advancement, withdrawal and retroflexion. Eight experienced endoscopists and eight non-endoscopists used both the eye gaze system and a conventional endoscope to identify ten targets in two simulated environments: a sphere and an upper gastrointestinal (UGI) model. Completion of tasks was timed. Subjective feedback was collected from each participant on task load (NASA Task Load Index) and acceptance of technology (Van der Laan scale). Results Non-endoscopists were significantly quicker using gaze-control rather than conventional endoscopy (sphere task 3:54 ± 1:17 vs. 9:05 ± 5:40 min, p = 0.012, and UGI model task 1:59 ± 0:24 vs. 3:45 ± 0:53 min, p < .001). Non-endoscopists reported significantly higher NASA-TLX workload total scores using conventional endoscopy versus gaze-control (80.6 ± 11.3 vs 22.5 ± 13.8, p < .001). Endoscopists reported significantly higher total NASA-TLX workload scores using gaze-control versus conventional endoscopy (54.2 ± 16 vs 26.9 ± 15.3, p = 0.012). All subjects gave the gaze-control positive ‘usefulness’ and ‘satisfaction’ scores of 0.56 ± 0.83 and 1.43 ± 0.51 respectively. Conclusions The novel eye gaze-control system was significantly quicker to use and subjectively lower in workload for non-endoscopists. Further work is needed to see whether this translates into a shallower learning curve to proficiency versus conventional endoscopy. The eye gaze-control system appears feasible as an intuitive endoscope control system. Hybrid gaze and hand control may prove a beneficial technology for evolving endoscopic platforms.
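
    A minimal sketch of the kind of gaze-to-steering mapping such a system might use is shown below: the gaze point's offset from the image centre is scaled into bending-velocity commands, with a deadzone so a central gaze holds the scope steady. The gains, deadzone and interface are assumed, not the trial system's parameters.

    # Sketch: proportional mapping from the gaze point on the endoscopic video
    # to bending-section velocity commands. Gains, deadzone, and the motor
    # interface are assumptions, not the trial system's actual parameters.
    IMG_W, IMG_H = 640, 480
    GAIN = 0.005          # rad/s per pixel of gaze offset (assumed)
    DEADZONE_PX = 40      # ignore small offsets so a central gaze holds steady

    def gaze_to_bending(gaze_xy):
        """Return (yaw_rate, pitch_rate) in rad/s steering toward the gaze."""
        dx = gaze_xy[0] - IMG_W / 2
        dy = gaze_xy[1] - IMG_H / 2
        if dx * dx + dy * dy < DEADZONE_PX ** 2:
            return 0.0, 0.0          # gaze near centre: no steering command
        return GAIN * dx, GAIN * dy  # steer so the target drifts to centre

    print(gaze_to_bending((320, 240)))  # centred gaze -> (0.0, 0.0)
    print(gaze_to_bending((520, 240)))  # gaze right   -> (1.0, 0.0)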

    Free-view, 3D gaze-guided, assistive robotic system for activities of daily living

    Patients suffering from quadriplegia have limited body motion, which prevents them from performing daily activities. We have developed an assistive robotic system with an intuitive free-view gaze interface. The user's point of regard is estimated in 3D space while allowing free head movement, and is combined with object recognition and trajectory planning. This framework allows the user to interact with objects using fixations. Two operational modes have been implemented to cater for different eventualities. The automatic mode performs a pre-defined task associated with a gaze-selected object, while the manual mode allows gaze control of the robot's end-effector position in the user's frame of reference. User studies reported effortless operation in automatic mode. A manual pick-and-place task achieved a success rate of 100% on the users' first attempt.
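
    The two operational modes could be organised along the lines of the skeleton below, where automatic mode dispatches a pre-defined task for the gaze-selected object and manual mode servos the end-effector toward the 3D point of regard. The task table and robot interface are placeholders, not the authors' API.

    # Skeleton of the two operational modes: automatic runs a pre-defined task
    # for a gaze-selected object; manual servos the end-effector toward the 3D
    # point of regard. Task table and robot interface are illustrative only.
    from enum import Enum

    class Mode(Enum):
        AUTOMATIC = 1
        MANUAL = 2

    # pre-defined task per recognisable object (assumed associations)
    TASKS = {"cup": "pick_and_bring_to_mouth", "book": "place_on_stand"}

    def step(mode, robot, gaze_point_3d, selected_object=None):
        if mode is Mode.AUTOMATIC and selected_object in TASKS:
            robot.run_task(TASKS[selected_object], selected_object)
        elif mode is Mode.MANUAL:
            # follow the user's fixation in their own frame of reference
            robot.move_end_effector_toward(gaze_point_3d)

    class FakeRobot:  # stand-in so the sketch runs without hardware
        def run_task(self, task, obj): print(f"auto: {task}({obj})")
        def move_end_effector_toward(self, p): print("manual: toward", p)

    step(Mode.AUTOMATIC, FakeRobot(), None, selected_object="cup")
    step(Mode.MANUAL, FakeRobot(), (0.4, 0.1, 0.3))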

    Intuitive gaze-control of a robotized flexible endoscope

    Flexible endoscopy is a routinely performed procedure that has remained largely unchanged for decades despite its many challenges. This paper introduces a novel, more intuitive and ergonomic platform that can be used with any flexible endoscope, allowing easier navigation and manipulation. A standard endoscope is robotized, and a gaze-control system based on eye-tracking is developed and implemented, allowing hands-free manipulation. The system characteristics and step response have been evaluated using visual servoing. Further, the robotized system has been compared with a manually controlled endoscope in a user study. The users (n=11) showed a preference for the gaze-controlled endoscope and reported a lower task load when the task was performed with gaze control. In addition, gaze control was associated with a higher success rate and a shorter task completion time. The results validate the system's technical performance and demonstrate the intuitiveness of hands-free gaze control in flexible endoscopy.
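
    For intuition about the step-response evaluation, the sketch below simulates a discrete proportional visual-servoing loop in which the image-space error to a step gaze target decays geometrically. The gain and ideal first-order plant are assumptions, not measured system characteristics.

    # Minimal discrete visual-servoing loop for a step-response check: the
    # image-space error between a fixed gaze target and the endoscope's
    # optical axis decays under a proportional controller. Gain is assumed.
    KP = 0.3  # proportional gain (assumed)

    def step_response(error0_px, steps=20):
        """Simulate error decay after a step gaze target; return the trace."""
        err, trace = error0_px, []
        for _ in range(steps):
            cmd = KP * err  # velocity command proportional to current error
            err -= cmd      # ideal first-order plant: error shrinks by cmd
            trace.append(err)
        return trace

    trace = step_response(100.0)
    print([round(e, 1) for e in trace[:5]])  # 70.0, 49.0, 34.3, 24.0, 16.8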